830 research outputs found

    Fast path search in static and dynamic high-cardinality graphs

    Graphs have been employed to resolve complex problems of many kinds since they first appeared. To do so, all the information associated with the domain is represented by nodes and relationships between nodes (links), and various operations are then applied to the resulting graph. Among these, the search for paths is one of the most frequent and allows problems of different natures to be resolved. Because so many problems are solved by applying path searches, a great number of works have contributed path-search algorithms to the state of the art. Nonetheless, new requirements keep appearing for the graphs on which path searches are conducted, so new proposals driven by evolving needs continue to arise, adding new nuances to the problem. Among these new characteristics, those deserving special attention are the continuous growth of graphs; the fact that both their structure and the costs associated with nodes and links vary over time; and the appearance of new topologies arising from the size of graphs, as in the case of Small-World Networks. 
Also, as there are a large number of real-time or time-limited applications, response time is given priority over obtaining paths with the lowest possible cost; it is sufficient that their quality lies within a specific margin with respect to the optimum. After a study of the state of the art, it was found that no existing work searches for paths quickly on high-cardinality (henceforth "huge") dynamic graphs with a generic structure. Hence, this doctoral thesis sets out a proposal based on Ant Colony Optimisation (ACO) algorithms to cover this gap; its objectives can be summed up as follows: short response times, huge dynamic graphs, and generic topology. This kind of algorithm was chosen as the basis for the proposal because of its proven capacity to adapt to dynamic environments, as well as its efficiency in finding paths between nodes. Nevertheless, ACO algorithms have usually been applied to graphs with at most a few thousand nodes, as they do not behave properly on larger graphs: the ants fail to find the destination or, should they do so, the quality of the path obtained is far below the optimum. To allow their use on larger graphs, some proposals incorporate a pre-processing of the graph, so that the search is conducted on fragments of the main graph instead of on the complete graph. The problem with such pre-processing is that it entails a loss of the adaptability characteristic of this kind of algorithm: once a change has come about, the search has to stop and wait for the entire pre-processing to be executed again. To put forward an algorithm capable of meeting all the objectives analysed above, the proposal presented here incorporates a path-search aid which in no way modifies the structure of the graph and which is based on the way animals reduce the search space to find food by using their sense of smell. 
To achieve this, what have been called Food Nodes and Smell of Food have been included in the algorithm, so that the ACO (Ant Colony Optimisation) algorithm becomes the SoSACO (Sense of Smell ACO) algorithm. Along with this aid, it was necessary to select the best storage medium to handle the massive amount of information used in this work while also allowing concurrent access to it. Both characteristics are fully covered by a secondary support, specifically a Database Management System, which adds powerful information management and thereby improves access times. Furthermore, using a database management system allows the problem of searching for paths in large graphs to be divided in two, separating the task of searching for a path from that of handling a massive amount of data: the management system takes charge of the latter, so this work only has to deal with finding a search algorithm suitable for large dynamic graphs. The stages around which the proposal is structured are as follows. Choice of Food Nodes: a series of Food Nodes is chosen from among the graph's nodes based on the frequency with which a node is used as a path endpoint; on the graph's distribution, allowing the graph to be divided into areas; on the node's degree; or on the node's frequency of transit. Once chosen, they can be used as meeting points for ants (should the food node not be the destination node), so that when an ant reaches one of them it can directly obtain the overall path by linking both sections, provided its destination node is the node of origin of another ant that previously reached the food node. 
Smell of Food: the function of this new parameter is to mimic the smell given off by any food source in nature, creating an area around it within which the sense of smell indicates where the food source is to be found. This characteristic has nothing to do with the pheromone: while the latter is used by ants to communicate indirectly among themselves, the smell is only used as an aid in their search. Initial Dispersion of Smell: the smell is dispersed from Food Nodes to the nodes around them, so that the strongest smell is found at the food nodes and the surrounding nodes are assigned a quantity that progressively decreases with distance from the source of the smell. Additionally, a threshold is set below which smell is no longer assigned to nodes, thereby limiting the size of the areas of smell; a balance must therefore be struck between the number of food nodes and the threshold. Radial Dispersion of Smell: this comes about when a food node is used to go from the starting node to the end node. It aims to mimic the fact that when a real ant gathers a piece of food, the food has a smell that is dispersed along the path the ant uses to transport it. As a result, the initial size of the areas of smell around food nodes is rapidly extended, especially at frequently transited nodes, thanks to the different searches made by the ants. The smell is dispersed with a value that decreases with distance to the food node; in this case, the dispersion stops once the smell to be assigned reaches zero. Thanks to this entire process, an ant searches for its destination node by probabilistically following the pheromone trail left by other ants, maintaining its capacity to adapt to changes thanks to this random process. 
If it comes across a trail of smell during the search, it can use it to reach the food node (following the increasing trail of smell) and, if that node is not the destination, to use it as a meeting point. From the above it can be seen that the addition made to the algorithm changes neither the graph's structure nor the way ants usually conduct their searches. Thus, if a change comes about that affects the smells and requires dispersing them again, or deletes an already created area, the ants are not interrupted and can carry on with their search while whatever is necessary to restore the smells is done. Additionally, given that finding a path using the trail of smell only involves following it in an increasing direction, the first approximate path found can be shown rapidly, thereby meeting the priority objective set for this thesis: reducing response times. To meet the remaining objectives, and to further reduce the time needed to obtain a first response, the proposal evolved through several cycles in which values were fixed for certain parameters of the algorithm (thanks to a series of experiments run on huge graphs) and new parameters were incorporated to solve problems detected in preceding cycles and to include new constraints. In other words, an incremental iterative development method was followed. 
The algorithm's cycles of evolution were based on applying the algorithm to huge static graphs with a single food node, to huge static graphs with several food nodes, to huge dynamic graphs with several food nodes, and to huge static graphs with several food nodes plus a pre-processing of paths between selected food nodes and their maintenance (running the SoSACO algorithm in parallel with the requested path searches), in order to further improve response times and, secondarily, the quality of the paths. After completing all these cycles and studying the results of the experiments in each one, it can be stated that the proposed algorithm is capable of reducing the rate of lost ants and the response time, of adapting to changes in the graph, and of obtaining paths whose quality lies within a specific margin with respect to the optimum when executed on huge graphs. In other words, all the objectives initially set for this doctoral thesis have been met.
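The two smell mechanisms described above (bounded initial dispersion from a food node, and an ant greedily climbing the smell gradient) can be sketched as follows. This is a minimal illustration assuming a multiplicative per-hop decay and a hard cutoff threshold; the function names, decay model and parameters are illustrative assumptions, not the thesis's exact formulation.

```python
from collections import deque

def disperse_smell(adj, food_node, max_smell=100.0, decay=0.5, threshold=1.0):
    """Breadth-first dispersion of smell outward from a food node.

    Each node one hop further from the food node receives the previous
    level's smell multiplied by `decay`; dispersion stops once the value
    would fall below `threshold`, which bounds the size of the smell area.
    """
    smell = {food_node: max_smell}
    queue = deque([food_node])
    while queue:
        node = queue.popleft()
        next_value = smell[node] * decay
        if next_value < threshold:
            continue
        for neigh in adj[node]:
            if neigh not in smell:  # keep the strongest (first-reached) value
                smell[neigh] = next_value
                queue.append(neigh)
    return smell

def follow_smell(adj, smell, start):
    """Greedy climb along strictly increasing smell values toward the food node."""
    path = [start]
    current = start
    while True:
        best = max(adj[current], key=lambda n: smell.get(n, 0.0), default=None)
        if best is None or smell.get(best, 0.0) <= smell.get(current, 0.0):
            return path
        path.append(best)
        current = best
```

Once an ant stumbles on any node holding a positive smell value, following the increasing values deterministically leads it to the food node, which is what makes the first approximate answer cheap to produce.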

    Using the ACO algorithm for path searches in social networks

    The original publication is available at www.springerlink.com. One of the most important types of applications currently being used to share knowledge across the Internet are social networks. In addition to their use in social, professional and organizational spheres, social networks are also frequently utilized by researchers in the social sciences, particularly in anthropology and social psychology. In order to obtain information related to a particular social network, analytical techniques are employed to represent the network as a graph, where each node is a distinct member of the network and each edge is a particular type of relationship between members including, for example, kinship or friendship. This article presents a proposal for the efficient solution of one of the most frequently requested services on social networks; namely, taking different types of relationships into account in order to locate a particular member of the network. The solution is based on a biologically-inspired modification of the ant colony optimization algorithm. This study was funded through a competitive grant awarded by the Spanish Ministry of Education and Science for the THUBAN Project (TIN2008-02711) and through the MA2VICMR consortium (S2009/TIC-1542, http://www.mavir.net), a network of excellence funded by the Madrid Regional Government. Publicado

    A Bio-Inspired Algorithm for Searching Relationships in Social Networks

    Proceedings of: Third International Conference on Computational Aspects of Social Networks (CASoN), held 19-21 October 2011 in Salamanca (Spain). The event Web site is http://www.mirlabs.net/cason11/. Nowadays, social networks are experiencing growing importance, since they enable the exchange of information among people, meeting people in the same field of work, and establishing collaborations with other research groups. In order to manage social networks and to find people inside them, they are usually represented as graphs with persons as nodes and relationships between them as edges. Once this is done, establishing contact with anyone involves searching for the chain of people that reaches him/her, that is, searching for the path inside the graph that joins two nodes. In this paper, a new nature-inspired algorithm is proposed to perform this search: SoS-ACO (Sense of Smell - Ant Colony Optimization). This algorithm improves on the classical ACO algorithm when applied to huge graphs. This study was funded through a competitive grant awarded by the Spanish Ministry of Education and Science for the THUBAN Project (TIN2008-02711) and through the MA2VICMR consortium (S2009/TIC-1542, http://www.mavir.net), a network of excellence funded by the Madrid Regional Government. Publicado
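The probabilistic search step that SoS-ACO inherits from classical ACO can be sketched as follows. This is a minimal illustration in which an ant picks its next hop with probability proportional to edge pheromone; the heuristic (visibility) term, the evaporation/deposit updates, and all names and data layouts are simplifying assumptions, not the paper's exact formulation.

```python
import random

def choose_next(pheromone, neighbors, current, visited, alpha=1.0, rng=random):
    """ACO transition rule: pick an unvisited neighbor with probability
    proportional to pheromone[(current, n)] ** alpha (heuristic term omitted)."""
    candidates = [n for n in neighbors[current] if n not in visited]
    if not candidates:
        return None  # dead end: the ant is lost
    weights = [pheromone[(current, n)] ** alpha for n in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

def ant_walk(neighbors, pheromone, start, goal, max_steps=100, rng=random):
    """One ant's probabilistic walk from start toward goal.

    Returns the path found, or None if the ant gets stuck or exceeds
    max_steps (the 'lost ant' case that SoS-ACO aims to reduce).
    """
    path, visited = [start], {start}
    current = start
    for _ in range(max_steps):
        if current == goal:
            return path
        nxt = choose_next(pheromone, neighbors, current, visited, rng=rng)
        if nxt is None:
            return None
        path.append(nxt)
        visited.add(nxt)
        current = nxt
    return None
```

On huge graphs this random walk alone rarely reaches the goal, which is the failure mode that motivates adding the smell areas around food nodes.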

    Interpretation and generation incremental management in natural interaction systems

    Human interaction develops as an exchange of contributions between participants. The construction of a contribution is not an activity unilaterally created by the participant who produces it; rather, it constitutes a combined activity between the producer and the rest of the participants who take part in the interaction, by means of simultaneous feedback. This paper presents an incremental approach (without losing sight of how turns are produced throughout time), in which contributions are interpreted as they take place, and the final generated contributions are the result of constant rectifications, reformulations and cancellations of the initially formulated contributions. The Continuity Manager and the Processes Coordinator components are proposed. The integration of these components in natural interaction systems allows for a joint approach to these problems. Both have been implemented and evaluated in a real framework called the LaBDA-Interactor System, which has been applied to the "dictation domain". We found that the degree of naturalness of this turn-taking approach is very close to the human one and that it significantly improves the interaction cycle. (c) 2012 British Informatics Society Limited. The development of this approach and its construction as part of the Natural Interaction System LaBDA-Interactor has been partially supported by MA2VICMR (Regional Government of Madrid, S2009/TIC-1542); SemAnts (Spanish Ministry of Industry, Tourism and Trade, AVANZA I+D TSI-020110-2009-419); THUBAN (Spanish Ministry of Education and Science, TIN2008-02711); and 'Access Channel to Digital Resources and Contents' (Spanish Ministry of Education and Science, TSI-020501-2008-54). Publicado

    Risk portfolio optimization using the Markowitz MVO model, in relation to human limitations in predicting the future, from the perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, given the human inability to predict the future precisely, as written in the Al-Qur'an, surah Luqman, verse 34, they have to manage it to yield an optimal portfolio. The objective here is to minimize the variance among all portfolios or, alternatively, to maximize the expected return among all portfolios that have at least a certain expected return. Furthermore, this study focuses on optimizing the risk portfolio via the Markowitz MVO (Mean-Variance Optimization) model. The theoretical frameworks for the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Moreover, finding a minimum-variance portfolio produces a convex quadratic program: minimizing the objective function xᵀQx subject to the constraints μᵀx ≥ r and Ax = b. The outcome of this research is the solution of the optimal risk portfolio over several investments, obtained using MATLAB R2007b software together with its graphical analysis.
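The minimum-variance problem described above has a well-known closed form in the simplest equality-constrained case. A minimal sketch, assuming a fully invested portfolio (weights sum to 1), no minimum-return constraint and no sign restrictions; this is not the abstract's exact MATLAB quadratic-programming setup, which also handles the μᵀx ≥ r constraint.

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form fully-invested minimum-variance portfolio.

    Solves min xᵀΣx subject to 1ᵀx = 1, whose Lagrangian solution is
    x = Σ⁻¹1 / (1ᵀΣ⁻¹1). `cov` is the asset-return covariance matrix Σ.
    """
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # Σ⁻¹1 without forming the inverse
    return w / w.sum()              # normalise so the weights sum to 1
```

For two uncorrelated assets the solution simply overweights the lower-variance asset in inverse proportion to the variances.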

    Search for supersymmetry in events with one lepton and multiple jets in proton-proton collisions at √s = 13 TeV

    Peer reviewed.

    Measurement of the top quark mass using charged particles in pp collisions at √s = 8 TeV

    Peer reviewed.

    Search for Physics beyond the Standard Model in Events with Overlapping Photons and Jets

    Results are reported from a search for new particles that decay into a photon and two gluons, in events with jets. Novel jet substructure techniques are developed that allow photons to be identified in an environment densely populated with hadrons. The analyzed proton-proton collision data were collected by the CMS experiment at the LHC, in 2016 at √s = 13 TeV, and correspond to an integrated luminosity of 35.9 fb⁻¹. The spectra of total transverse hadronic energy of candidate events are examined for deviations from the standard model predictions. No statistically significant excess is observed over the expected background. The first cross section limits on new physics processes resulting in such events are set. The results are interpreted as upper limits on the rate of gluino pair production, utilizing a simplified stealth supersymmetry model. The excluded gluino masses extend up to 1.7 TeV, for a neutralino mass of 200 GeV, and exceed previous mass constraints set by analyses targeting events with isolated photons. Peer reviewed.

    Measurement of the top quark forward-backward production asymmetry and the anomalous chromoelectric and chromomagnetic moments in pp collisions at √s = 13 TeV

    The parton-level top quark (t) forward-backward asymmetry and the anomalous chromoelectric (d̂_t) and chromomagnetic (μ̂_t) moments have been measured using LHC pp collisions at a center-of-mass energy of 13 TeV, collected in the CMS detector in a data sample corresponding to an integrated luminosity of 35.9 fb⁻¹. The linearized variable A_FB^(1) is used to approximate the asymmetry. Candidate tt̄ events decaying to a muon or electron and jets in final states with low and high Lorentz boosts are selected and reconstructed using a fit of the kinematic distributions of the decay products to those expected for tt̄ final states. The values found for the parameters are A_FB^(1) = 0.048 +0.095/−0.087 (stat) +0.020/−0.029 (syst) and μ̂_t = −0.024 +0.013/−0.009 (stat) +0.016/−0.011 (syst), and a limit is placed on the magnitude |d̂_t| < 0.03 at 95% confidence level.

    Measurement of tt̄ normalised multi-differential cross sections in pp collisions at √s = 13 TeV, and simultaneous determination of the strong coupling strength, top quark pole mass, and parton distribution functions

    Peer reviewed.